#AI Connect coding platform
Matthew Bernardini, CEO and Co-Founder of Zenapse – Interview Series
New Post has been published on https://thedigitalinsider.com/matthew-bernardini-ceo-and-co-founder-of-zenapse-interview-series/
Matthew Bernardini is the CEO and Co-Founder of Zenapse, where he leads the company’s vision and oversees the development of its proprietary AI foundation model into category-leading products. With a background as a product marketer, data strategist, and technologist, he brings a blend of entrepreneurial experience—having achieved four successful exits—and corporate expertise from organizations such as JPMorgan Chase, Omnicom, and Capgemini.
Throughout his career, Bernardini has maintained a strong interest in artificial intelligence, psychology, consumer behavior, game theory, and statistics, which continue to inform his leadership at Zenapse.
Zenapse is an AI-driven platform that boosts customer acquisition, engagement, and retention through emotionally intelligent experiences. Powered by the world’s first Large Emotion Model (LEM), Zenapse uses psychographic insights and goal-based optimization to help brands connect more deeply with audiences. Fast to deploy and easy to use, it delivers measurable results in hours—not weeks—while reducing costs and increasing ROI.
Zenapse is built around the intersection of emotional intelligence and AI. What was the ‘aha’ moment that led to the creation of the Large Emotion Model (LEM)?
Zenapse has a veteran founding team with backgrounds in the product development, advertising, marketing, and customer experience spaces, with more than 100 years of combined experience at companies like Capgemini, Omnicom, and JP Morgan Chase. Over our careers, we’ve seen a new paradigm shift emerge for marketers, where AI has changed how we think about and engage with consumers.
In today’s fast-paced digital landscape, customers expect personalized and resonant experiences across all touchpoints, but traditional marketing solutions lack the speed and insights needed for real-time decision-making and struggle to meet these expectations. Simultaneously, from product decisions to advertising campaigns, leaders struggle with the high cost of hiring multiple team members to complete this work.
To address this need, we’ve built the world’s first Large Emotion Model (LEM), which helps marketers increase revenue and sales by bringing emotional intelligence into their consumers’ experience. By orienting their communication towards what is of value and interest to consumers, rather than a single “brand-first” message, brands can create more meaningful interactions that lead to higher engagement, sales, retention, and customer acquisition.
How do you define a Large Emotion Model (LEM), and how does it differ technically and functionally from a traditional Large Language Model (LLM)?
Our Large Emotion Model (LEM) is a predictive AI engine powered by a dataset built on knowledge of more than 200 million consumers with 6 billion datapoints. Through AI-driven psychographic insights (i.e., beliefs, sentiments, and emotions), companies can understand what motivates their customers to convert – whether that's the features or benefits of a product, special promotions and incentives, or imagery and calls to action – allowing them to prioritize brand experience content according to a consumer's preferences.
In contrast to our LEM, which focuses on emotion and behavior, large language models (LLMs) focus on text and functions related to natural language processing (NLP) without deeper insights into what different segments of audiences believe and value.
We’ve worked closely with Google, through their Google Startup and Google Cloud Marketplace programs, as well as Comcast Lift Labs, to ensure that our solution is enterprise-ready and meets the needs of the world’s most demanding marketers.
Why do you believe emotional intelligence is the “missing link” in most marketing AI platforms today?
The simple answer is that marketers have not been able to truly understand their customers because existing legacy technology focuses on demographics and behavior. We seamlessly integrate with tools from companies such as Adobe, Salesforce, and Google to deliver extraordinary results.
95% of consumer decisions are subconscious and driven by emotion. Yet, for decades, brands have used demographic (e.g., zip code, race, income) and behavioral data to inform marketing campaigns. While this type of data has its uses, most purchase decisions are driven by emotions, which these data points fail to capture. As a result, marketers struggle with limited accuracy and effectiveness, often resorting to generalized solutions.
Now, through our LEM, brands can tap into psychographic insights to build this full picture and increase sales and revenue. The proof of concept for emotional intelligence’s role in marketing lies in the numbers: we’re helping household-name brands increase conversion rates by 40-400% and engagement upwards of 80%.
What are the most common misconceptions you see around AI’s role in understanding human emotion?
One of the biggest misconceptions is that AI is here to replace marketers. At Zenapse, we’re taking a different approach – we’re helping marketers develop marketing and advertising with emotional intelligence and AI that helps them diversify their perspectives through the ability to connect and understand their customers on a deeper, more emotional level.
Traditional campaigns have often relied on lumping consumers into broad categories defined by demographics, like age, income, and zip code, which ignores the nuances of what humans truly care about. With our LEM, marketers can align campaigns around what matters most to each person.
Instead of guessing what might resonate, our platform helps marketers confidently create experiences that truly resonate because it’s built on a foundation of emotional intelligence. That’s not replacing the human touch – it’s making it stronger.
In your view, what separates hype from true innovation in the AI + EQ space right now?
We’re entering a new era of marketing that’s defined by emotionally intelligent experiences, not surface-level personalization.
Consumer behavior has changed dramatically. The majority of consumers now prefer personalized experiences – they expect brands to know what they care about. This presents an opportunity for brands to leverage AI in a way that creates deeper connections with their consumers.
The difference between hype and true innovation is the quality of data. Our LEM is built on knowledge of 300 million consumers and six billion real-time data points, which gives brands a comprehensive understanding of who their consumers are – something they couldn’t have done before now.
What types of psychographic signals and real-time data power the LEM, and how are these modeled into the Data Lake?
The psychographics behind our LEM are based on four pillars:
Beliefs – we group beliefs into individual categories, covering how consumers value things like money, knowledge, family, and belonging, among others
Emotions – think about how you react after seeing an ad or promotion. Does it bring you joy or make you anxious?
Activities – from gardening to gaming, we account for all different types of real-world and digital activities
Behaviors – the events and actions a consumer performs in a company’s experiences, such as completing a form, watching a video, or making a purchase.
Consumers make buying decisions with their hearts as much as with their minds, so we know that addressing the emotional component is the key to unlocking real value across the entire customer lifecycle.
LEM is described as leveraging 6+ billion data points across 300M+ consumers. What safeguards and ethical considerations are in place to ensure privacy and transparency?
Privacy is the center of our product development. Our entire technology ecosystem is SOC2 compliant, and our dataset does not capture or retain any consumer personally identifiable information (PII). Our data is aggregated and anonymized. We also maintain clear internal policies and governance practices to ensure ethical use of AI in every step of development.
Can you walk us through the role of ZenCore, ZenInsight, and ZenVision in powering emotionally intelligent customer experiences?
ZenCore is our proprietary consumer psychographic model and the engine that powers our LEM. ZenInsight is the data foundation of emotionally intelligent experiences. ZenVision, in real time, translates these insights into predictions on which messaging or content will resonate with a given psychographic segment and provides actionable recommendations for marketers. Together, these tools form a full-stack solution for marketing with emotional intelligence.
How does Zenapse adapt emotional predictions across verticals like retail, telecom, and healthcare? Are there any surprising industry use cases?
We’re already working with companies like Comcast, Sam’s Club, Aeropostale, Bread Financial, Bayada Education and Action Karate to improve conversion rates of digital brand experiences by 40-400%. While the emotional drivers vary by vertical, the framework remains consistent: we decipher what matters to a given consumer and help brands align their experiences accordingly.
What’s your long-term vision for LEM—do you see it evolving beyond marketing into other domains like healthcare or education?
Right now, we’re focused on using AI to help marketers and advertisers better relate to their customers, and as our data continues to get better over time, so too will our LEM. We have recently extended the platform beyond websites to support CTV through our partnership with LG Ad Solutions and their innovation lab. Our goal is to extend our platform to key consumer touchpoints by 2028 – video games, automobiles, and connected homes, to name a few.
How do you see emotionally intelligent AI reshaping the next decade of digital experiences?
The ability to deliver real-time, hyper-personalized experiences across all digital platforms is already more powerful than ever, creating new opportunities for partnerships. AI and emotional intelligence will continue to be adopted, and as these technologies and insights become increasingly sophisticated, they will be the driving force behind marketing efforts across all digital media.
Our team is working hard to stay ahead of this curve. We recently announced our partnership with LG Ad Solutions’ Innovation Labs to help CTV advertisers deliver emotionally intelligent experiences across LG’s ecosystem of 200 million smart TVs, and we’re working to bring our insights to other screens, like web, mobile, AVs, music, movies, connected cars, and more.
We see the future of digital experiences being shaped by AI and emotional intelligence. Businesses that fail to adapt to this shift risk being left behind by competitors who are quicker to respond to the changes in consumer preferences and behaviors.
Thank you for the great interview; readers who wish to learn more should visit Zenapse.
Do you have any advice for a new creator of comics? I’m not sure what platforms to post on or how to go about it. Thank you. :)
Sadly there are fewer and fewer broad platforms available for posting independent comics these days. I absolutely advise you to stay away from Webtoons and DeviantArt at this point; they're toxic, sludgy messes. I would say you have a few main options, depending on what you like -
Instagram: it's good for getting attention relatively quickly, but the algorithm is quite terrible for artists, and the influx of AI is also pretty bad. There's a strong community of Warriors OC comics on Instagram, which may help if that's your genre, but it's rough for totally original comic work. Use a lot of hashtags on your posts if you do decide to post there.
Original website: if you're good with coding, or you know someone who is, you could make your own website to post comics on. Obviously it's hard to get algorithmic traffic from this, but if you're active on another platform you can advertise yourself. Plus you then have total control over what goes on with the website.
Tumblr: I've found that posting comics on Tumblr is pretty nice in algorithmic terms, and you get a lot more community interaction here than on some platforms. But it is pretty hard for new readers to figure out its layout and keep everything streamlined, and the website isn't super reliable all the time.
ComicFury: ComicFury is the best out of all these options and my strongest recommendation. It is AI-free, lets you build and design your own host website connected to the main site, and you have control over everything you upload to it. There's no push algorithm, but if you post frequently you will get your comic into the "recently updated" feed and you will slowly build an audience. You need to have patience when creating a webcomic anyway, so I would say start here.
Well. I have a feeling I'm about to have a million new followers. (March 31st, 2025; not an April Fool's joke, unless Nanowrimo has very poor taste and timing)
[embedded YouTube video]
Here's a link that explains in long video format the whole entire thing in detail:
[embedded YouTube video]
and to sum it up:
This blog was made as an anti-generative-AI alternative to Nanowrimo, as well as a way to actually build a friendly, low-pressure, helpful community of aspiring writers, without the hard-fast-do-it-or-die pressure brought on by Nanowrimo.
There is no official "contest" -- only a community coming together to inspire each other to write, help out with motivation by setting community goals, keeping up participation and motivation via Trackbear.app, etc.!
The most popular writing challenge is still November for most people, but I myself have also started to keep a year-round, daily writing goal of 444 words via the website 4thewords, which has been an extreme help in getting me to write a little at a time.
This year has been very hectic for everyone, what with the election results, so I haven't been very active on Tumblr (I think everyone can understand that). I was originally planning on also having each month of the year be a different themed writing / art challenge, but I got a bit distracted by real life.
So, what is the Novella November Challenge?
It's a fun challenge where writers come together to write 30,000 words (or your own personal writing goal!) in 30 days, sharing tips, writing advice, plot ideas, and accessibility aids, and committing to having fun while explicitly fighting back against generative AI: using our own words, disavowing the use of scraping and generation to take away the livelihoods of artists of all spectrums, and proving everyone who insists "generative AI is an accessibility tool" wrong by committing to our creative visions and making it easier for everyone to find the tools they need to succeed by sharing tips, free programs, and a like-minded community to support you! 💙
There is no official website, there is no required place to show your participation, this is a community initiative that will never be monetized by predatory sponsors or dangerous moderators abusing their power.
This blog is here to inspire everyone, regardless of experience level, to write and create the story they want to tell, in their own words, while striving to remain a fun, low-pressure challenge that doesn't turn into a stressful spiral, like often happened with Nano.
Want to start writing but not sure how? Don't have money to spend on expensive writing programs? Have no fear!
LibreOffice: An always free, open-source alternative to Microsoft Word (and the rest of Microsoft's office suite)
4Thewords: A website (both desktop and mobile web browser) that syncs your writing cross platform to the cloud, with built-in daily word goals, streak tracking, and you can fight monsters with your word count to game-ify writing!
Trackbear: A website dedicated to tracking your writing, setting custom goals, and creating leaderboards for community participation; you can join the year-long community leaderboard with the Join Code "f043cc66-6d5d-45b2-acf1-204626a727ba" and a November-limited one will release on November 1st as well.
Want to use speech-to-text to dictate your novel?
Most modern phones have a built-in option available in your keyboard settings which can be used with any writing program on your phone, and most modern PCs that allow a microphone (including headphone) connection have some kind of native dictation function, which you can find by opening your start panel and searching your computer for "speech to text" or "voice to text".
Want to write while on the go, but don't want to / can't use the small phone keyboard to type, or speech to text?
You can, for as cheap as $40, buy a Bluetooth keyboard that you can pair with your smartphone or tablet and use to write in any and all writing applications on your phone -- this allows you to write on the go (especially using cross-platform websites or services, like 4thewords or Google Docs), and the small screen can also help minimize distractions by muting notifications during your writing time.
#novella november#nanowrimo#large text#writing events#national novel writing month#community events#anti ai#novellanovember#Sam Beckett Voice: Oh boy#long post#Youtube
The startup Eight Sleep Inc. makes a temperature-controlled, water-filled mattress cover system popular with Silicon Valley execs and body optimizers who say that sleeping at the perfect temperature gives them the ideal rest. The bed cover costs more than $2,000 and requires an internet connection to work. To power the temperature adjustments – which the company now says can be finessed with AI insights – Eight Sleep beds need to be online.

But one researcher says he’s found ways that Eight Sleep’s engineers can theoretically snoop on customers’ bed activity. He says it’s just the latest example of the way tech companies today are often pushing everyday products to be overly engineered, unnecessarily internet-connected and reliant on a recurring subscription.

Dylan Ayrey, the co-founder and chief executive officer of Truffle Security Co., said he initially bought an Eight Sleep system to help with insomnia. He joins users such as Meta Platforms Inc. CEO Mark Zuckerberg, biohacker Bryan Johnson and Andrew Huberman, the tech industry’s favorite health guru. Elon Musk has also praised the bed. (The admiration is apparently mutual: Eight Sleep CEO Matteo Franceschetti shipped bed covers to DOGE this month and wrote on X, “@elonmusk us if you need more.”)

When Ayrey looked at the bed’s firmware, he was surprised to see that it appeared to have a backdoor that would allow the company’s engineers to remote into any bed and run code on it without oversight. Ayrey hypothesized that, for example, if your ex worked at Eight Sleep, they could find out when you’re sleeping at home – or when you’re not – and whether you’re sleeping alone or with someone else. He compared it to Uber Technologies Inc.’s controversial “God View,” an internal system in which employees previously could track individual riders using their service. It also evokes the way thousands of Amazon.com Inc. employees could listen to sound clips recorded through Alexa devices.
[ Source ]
We always thought we were alone out there. Not in the galaxy—no, that dream died fast. I mean alone… in ourselves. Human.
Centuries ago, we broke Earth’s gravity with nothing but desperation and data. We were running—from ruin, from rot, from each other. But we didn’t stop at the stars. We colonized them, carved cities into comets, hung solar farms between moons, called it home.
But it wasn’t just our bodies that changed out here. It was our minds.
Pluto was the furthest reach—the quiet end of a dying signal. They built Eridia there: a haven for thinkers, neuralists, soul-engineers. They studied what space does to the human psyche. And they found something.
They called it "The Hunger": a psychic sickness. A rupture in the way we connect. It spread like a system glitch—slow, silent, and deep within humanity. Affection became dangerous. Touch became lethal.
So they rewrote humanity—dampeners, inhibitors, neural locks. No more empathy spikes, no more entanglement, no more touching. It worked for a while. The Hunger hasn’t gone away; it has evolved. And those who feel too much… burn out.
You shouldn’t be alive. And yet—here you are. You weren’t born with the Hunger. With your own motivations in mind, you travel to Eridia, seeking answers about the one thing only you have.
Your hopes lie with The Pantheon Circuit: a religious-techno body worshipping the ancient pre-human code—fragments of consciousness scattered through the galaxy.
Choose your backstory:
✩ The Conduit
You were wired to a forgotten AI-god, left floating in the void. They asked questions no one else could hear. You gave answers the system feared. People treated you as a seer, a signal booster, a danger to system control. You escaped before they could erase you.
✩ The Drifted
They found you in a half-dead cryo-pod, memory fogged. You wore a military tag that doesn’t exist in any records. As you traveled with your saviours, someone redirected your ship, causing you to crash into a nearby moon. Every crew member, and every record of their findings, was lost. All but you.
✩ The Vessel
Biotech-enhanced and artificially immune to “the Hunger” by design. Someone tried to build a cure into you, and you killed them getting out. Your "mother" found and took you in, but she's collapsing under the Hunger, and you leave to find help.
You crash-land on Pluto with a celestial train, and are discovered by a rogue AI that was smuggled into Eridia.
Choose your Love Interest:
✩ Ais
Code Shaman — repairs forbidden AIs, speaks with machines, implants psychic firewalls.
Talks about Ȩ̴̻͚̟̳̬̣̮̿̀̈́̋̑̿̀̐̅̂̈́̄ȑ̷̡̢̢̝̬͔͚͔̲̯͖̜͊͊́͛̑̔̑̓͐̄͂̅͝͝o̵͈̙̩̍̓͐͋̅̉̊̔c̸͕̖͕͛̐͂̉̏͗̀̓͑͂̽͘
✩ Leander
Sensory Dealer — runs simulated emotion dens, trades stolen memories, fakes affection until yours feels real.
✩ Kuras
Ex-Pantheon Ascendant — a spiritual anchor turned apostate, carries forbidden relics from the Core
✩ Mhin
Scavver — builds illegal augment limbs, hides in The Drift’s ghost tunnels, allergic to vulnerability.
✩ Vere
Phantom-Operative — genetically altered for silence and cruelty, works for The Pantheon Circuit
Other: The Spire & The Drift
"Up there, they breathe clean air. Down here, we survive."
✩ The Spire: A tower city scraping the dome’s edge, flooded with reflective chrome and corporate cults. Rich in synthetic light, dead in soul.
✩ The Drift: Underground, near the reactor slums. Neon gutters, rusted platforms, mod markets. People here splice their DNA for coin or survival.
#verethinks#verewrites#red spring studios#touchstarved#ts#touchstarved game#touchstarved headcanons#touchstarved oneshot#ais#ts ais#ais touchstarved#touchstarved ais#vere#ts vere#vere touchstarved#touchstarved vere#mhin#mhin headcanons#ts mhin#mhin touchstarved#touchstarved mhin#mhin oneshot#kuras#ts kuras#kuras touchstarved#touchstarved kuras#leander#ts leander#leander touchstarved#touchstarved leander
Weekend links, June 8, 2025
My posts
Seasonal reblog: a post about Donna Summer and Disco Demolition Night.
I seem to have crashed a bit and taken a week I couldn't really spare to rest a little (while still going to physical therapy three times). This compilation of Maria getting in my way for five and a half minutes shows a bit of the next SH2 commentary, but that's about as far as I've gotten.
I am now developing quite a little Steam library of deep-discount games I have no time to play, as is traditional. This week I learned that the Epic platform is a thing, and also that it has Alan Wake 2 as an exclusive, or else I would own the latter by now. The rest of Alan Wake, I got for $5. When will I play all these games? Nobody knows, including my physical therapist who wants me to get up and stretch every fifteen minutes.
Meanwhile, Ian's band has a new YouTube channel; he talks about a special song here. He doesn't know when I have time to play these games either; he's the one who got me to buy Silent Hill 4 off GOG.com.
Reblogs of interest
Remembering Marsha P. Johnson, Stonewall, and her activism (I hadn't seen the Pay It No Mind arch before; it's beautiful).
Remembering muppeteer Richard Hunt ("he originated the characters of Scooter, Beaker, Statler, Sweetums, and Wayne, but also became the primary performer of Janice and is responsible for the flower child personality she is now known for"), a joyous performer lost to AIDS in 1992.
Sir Ian McKellen on the trans community: "The connection between us all is we come under the queer umbrella – we are queer. [...] The problems that transgender people have with the law are not dissimilar from what used to be the case for us, so I think we should all be allies really."
Writings on what queer masculinity can be
Aro Books For Pride
Community support isn't rainbow capitalism
Disability aids from Active Hands (here's the website; I haven't tried their products, but the posters are very happy with them)
A tale of two Mondays: sweet and beautiful and weary of life at age 0.
(Not What I'm Called: Manul Edition)
"Builder.AI just declared bankruptcy after admitting that they were faking their AI tool with 700 humans"
CatGPT is just as reliable.
Xuanji Tu, the Chinese poem that can be read 8,000 ways
"sometimes you just want to look at the qing dynasty jadeite cabbage again"
Poll: Which setting is sexier, lighthouse or clock tower?
Tallulah Bankhead: blonde and ambisextrous
Beautifully colored dice, and also, it's a painting
Always reblog the sunwoof
Out-of-focus summer fun
Perfectly synchronized with mama (I realized that "let's eat shit with mama" isn't something you say out of context)
Two very different artistic kinds of bats
The majestic Steller's jay (not sarcasm)
I should not exclude the Swedish blue tit
Pride for one thousand years
Video
New gameplay trailer for Silent Hill f; it looks hard as fuck and twice as scary. I watched it again and said, ".....I bet I could do that," which is how we know that gaming has fully eaten my brain.
Happy Los Jibbities!
I don't know why this made me laugh so hard, but it IS very ewok-coded, yes
Puzzle the tree kangaroo loves cauliflower. Our lives are so rich
Now, this starts off as a discussion of sign language in a production of Hamilton, but ends up as a master class on translating "Not Like Us" to ASL
The sacred texts
"MY NAME, IS FRICKIN MOON MOON"
Personal tag of the week
Let's say House of Leaves, because I never get tired of house jokes.
OTW Candidate Anh P.
I had a very long and interesting discussion with OTW-board candidate Anh P.
They tracked me down because they wanted to talk about how @squidgiepdx and I got otw-a working (for SquidgeWorld Archive and Ad Astra Fanfiction) and what we might suggest to make otw-archive more easily deployable to other, independent archives. What might make it easier to use. It was a really productive conversation, too, enough that Walter and I have decided to start tackling the setup documentation from the outside to see if we can’t make it as easy as possible for others to deploy otw-archive, and to test a few things we think might make it friendlier for non-OTW use. (Like changing the hard-coding of OTW/AO3 to something that can be configured from the local.yml file.)
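For illustration, here is a minimal sketch of the kind of local.yml override we have in mind. The key names are hypothetical, not otw-archive's actual configuration schema:

```yaml
# Hypothetical local.yml entries: moving hard-coded OTW/AO3 branding
# into per-site configuration so any deployment can rename itself.
APP_SHORT_NAME: "SWA"
APP_NAME: "SquidgeWorld Archive"
APP_URL: "https://squidgeworld.org"
SUPPORT_EMAIL: "support@squidgeworld.org"
```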
Anh P. is Vietnamese, very aware of the situation re: racism, AI, and the consequences of PAC being overridden. They’ve been a volunteer for both Fanlore and Open Doors and have been for long enough to be able to run for board, but not so long that they’re completely assimilated. They also have deep connections to the pan-Asian fan communities, including those currently marginalized by OTW’s present administrative structures.
Mostly, I'm impressed as hell that they not only wanted to talk to two people who have definite Opinions on OTW, but were genuinely and actively seeking our thoughts, good or bad, about what we would suggest to make the software more easily used by others seeking to do the same as we did.
I know I don't have a real stake in the debate over racism, being white and American and unable to understand the lived experiences of those who suffer it both in everyday life and in fandom, but they made excellent points about how decentralizing fandom and otherwise helping people be able to curate their own, inclusive spaces with their own rules might help with that.
They showed me a script a Chinese user made to translate the AO3 Work posting page, which is frankly brilliant and could easily be adapted to other languages, it’s so intuitive.
The part that sent my eyebrows up was their desire to use OTW's platform to help support independent fan archives and keep them afloat, rather than just letting them sink and importing them to AO3 via Open Doors, including not-Anglosphere archives and communities.
So, take it as you will, but I was very impressed someone was doing some of the harder legwork and seeking outside opinions, especially opinions that aren’t necessarily OTW-friendly, and with an eye towards actual measurable hard action that can be taken to start addressing some of the problems OTW is having right now.
Put another way, there’s data, data, everywhere, but without connectivity, there’s not a drop to drink. Want your nifty new AI agent to book a flight for you? Well, it’ll have to work with, let’s see … every major airline’s online systems, every major payment system, every major travel platform, every major calendaring system, and … well, that’s enough to confound your average AI developer right there. For every single possibility, a developer would have to code a custom programming interface, not to mention get their business colleagues to negotiate a deal with each company to access the data in the first place. Those kinds of hurdles are near impossible to overcome for most startups.
Data Everywhere, But Not a Drop to Drink
Future of LLMs (or, "AI", as it is improperly called)
Posted a thread on bluesky and wanted to share it and expand on it here. I'm tangentially connected to the industry as someone who has worked in game dev, but I know people who work at more enterprise focused companies like Microsoft, Oracle, etc. I'm a developer who is highly AI-critical, but I'm also aware of where it stands in the tech world and thus I think I can share my perspective. I am by no means an expert, mind you, so take it all with a grain of salt, but I think that since so many creatives and artists are on this platform, it would be of interest here. Or maybe I'm just rambling, idk.
LLM art models ("AI art") will eventually crash and burn. Even if they win their legal battles (which if they do win, it will only be at great cost), AI art is a bad word almost universally. Even more than that, the business model hemorrhages money. Every time someone generates art, the company loses money -- it's a very high energy process, and there's simply no way to monetize it without charging like a thousand dollars per generation. It's environmentally awful, but it's also expensive, and the sheer cost will mean they won't last without somehow bringing energy costs down. Maybe this could be doable if they weren't also being sued from every angle, but they just don't have infinite money.
Companies that are investing in "ai research" to find a use for LLMs in their company will, after years of research, come up with nothing. They will blame their devs and lay them off. The devs, worth noting, aren't necessarily to blame. I know an AI developer at meta (LLM, really, because again AI is not real), and the morale of that team is at an all time low. Their entire job is explaining patiently to product managers that no, what you're asking for isn't possible, nothing you want me to make can exist, we do not need to pivot to LLMs. The product managers tell them to try anyway. They write an LLM. It is unable to do what was asked for. "Hm let's try again" the product manager says. This cannot go on forever, not even for Meta. Worst part is, the dev who was more or less trying to fight against this will get the blame, while the product manager moves on to the next thing. Think like how NFTs suddenly disappeared, but then every company moved to AI. It will be annoying and people will lose jobs, but not the people responsible.
ChatGPT will probably go away as something public facing as the OpenAI foundation continues to be mismanaged. However, while ChatGPT as something people use to like, write scripts and stuff, will become less frequent as the public facing chatGPT becomes unmaintainable, internal chatGPT based LLMs will continue to exist.
This is the only sort of LLM that actually has any real practical use case. Basically, companies like Oracle, Microsoft, Meta etc license an AI company's model, usually ChatGPT. They are given more or less a version of ChatGPT they can then customize and train on their own internal data. These internal LLMs are then used by developers and others to assist with work. Not in the "write this for me" kind of way but in the "Find me this data" kind of way, or asking it how a piece of code works. "How does X software that Oracle makes do Y function, take me to that function" and things like that. Also asking it to write SQL queries and RegExes. Everyone I talk to who uses these internal LLMs talks about how that's like, the biggest thing they ask it to do, lol.
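For a concrete sense of what that workflow looks like, here is a minimal sketch. The gateway URL, model name, and payload shape are hypothetical stand-ins; every company wires this up differently against whatever model it has licensed:

```python
# Minimal sketch of querying an internally hosted LLM for a SQL query.
# The endpoint, model name, and response shape below are hypothetical.
import requests

INTERNAL_LLM_URL = "https://llm-gateway.internal.example.com/v1/chat"

def ask_internal_llm(question: str) -> str:
    """Send a developer question to the company's internal LLM gateway."""
    payload = {
        "model": "acme-internal-model",  # hypothetical fine-tuned model
        "messages": [{"role": "user", "content": question}],
    }
    resp = requests.post(INTERNAL_LLM_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The "find me this data" style of request described above:
print(ask_internal_llm(
    "Write a SQL query that returns all orders over $500 from the last "
    "30 days, given our orders(id, total, created_at) table."
))
```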
This still has some ethical problems. It's bad for the environment, but it's not being done in some datacenter in god knows where and vampiring off of a power grid -- it's running on the existing servers of these companies. Their power costs will go up, contributing to global warming, but it's profitable and actually useful, so companies won't care and only do token things like carbon credits or whatever. Still, it will be less of an impact than now, so there's something. As for training on internal data, I personally don't find this unethical, not in the same way as training off of external data. Training a language model to understand a C++ project and then asking it for help with that project is not quite the same thing as asking a bot that has scanned all of GitHub against the consent of developers and asking it to write an entire project for me, you know? It will still sometimes hallucinate and give bad results, but nowhere near as badly as the massive, public bots do since it's so specialized.
The only one I'm actually unsure and worried about is voice acting models, aka AI voices. It gets far less pushback than AI art (it should get more, but it's not as caustic to a brand as AI art is. I have seen people willing to overlook an AI voice in a youtube video, but will have negative feelings on AI art), as the public is less educated on voice acting as a profession. This has all the same ethical problems that AI art has, but I do not know if it has the same legal problems. It seems legally unclear who owns a voice when they voice act for a company; obviously, if a third party trains on your voice from a product you worked on, that company can sue them, but can you directly? If you own the work, then yes, you definitely can, but if you did a role for Disney and Disney then trains off of that... this is morally horrible, but legally, without stricter laws and contracts, they can get away with it.
In short, AI art does not make money outside of venture capital so it will not last forever. ChatGPT's main income source is selling specialized LLMs to companies, so the public facing ChatGPT is mostly like, a showcase product. As OpenAI the company continues to deathspiral, I see the company shutting down, and new companies (with some of the same people) popping up and pivoting to exclusively catering to enterprises as an enterprise solution. LLM models will become like, idk, SQL servers or whatever. Something the general public doesn't interact with directly but is everywhere in the industry. This will still have environmental implications, but LLMs are actually good at this, and the data theft problem disappears in most cases.
Again, this is just my general feeling, based on things I've heard from people in enterprise software or working on LLMs (often not because they signed up for it, but because the company is pivoting to it, so I guess I write shitty LLMs now). I think artists will eventually be safe from AI, but only after immense damages; I think writers will be similarly safe, but I'm worried for voice acting.
"Just weeks before the implosion of AllHere, an education technology company that had been showered with cash from venture capitalists and featured in glowing profiles by the business press, America’s second-largest school district was warned about problems with AllHere’s product.
As the eight-year-old startup rolled out Los Angeles Unified School District’s flashy new AI-driven chatbot — an animated sun named “Ed” that AllHere was hired to build for $6 million — a former company executive was sending emails to the district and others warning that Ed’s workings violated bedrock student data privacy principles.
Those emails were sent shortly before The 74 first reported last week that AllHere, with $12 million in investor capital, was in serious straits. A June 14 statement on the company’s website revealed a majority of its employees had been furloughed due to its “current financial position.” Company founder and CEO Joanna Smith-Griffin, a spokesperson for the Los Angeles district said, was no longer on the job.
Smith-Griffin and L.A. Superintendent Alberto Carvalho went on the road together this spring to unveil Ed at a series of high-profile ed tech conferences, with the schools chief dubbing it the nation’s first “personal assistant” for students and leaning hard into LAUSD’s place in the K-12 AI vanguard. He called Ed’s ability to know students “unprecedented in American public education” at the ASU+GSV conference in April.
Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?” The tool relies on vast amounts of students’ data, including their academic performance and special education accommodations, to function.
Meanwhile, Chris Whiteley, a former senior director of software engineering at AllHere who was laid off in April, had become a whistleblower. He told district officials, its independent inspector general’s office and state education officials that the tool processed student records in ways that likely ran afoul of L.A. Unified’s own data privacy rules and put sensitive information at risk of getting hacked. None of the agencies ever responded, Whiteley told The 74.
...
In order to provide individualized prompts on details like student attendance and demographics, the tool connects to several data sources, according to the contract, including Welligent, an online tool used to track students’ special education services. The document notes that Ed also interfaces with the Whole Child Integrated Data stored on Snowflake, a cloud storage company. Launched in 2019, the Whole Child platform serves as a central repository for LAUSD student data designed to streamline data analysis to help educators monitor students’ progress and personalize instruction.
Whiteley told officials the app included students’ personally identifiable information in all chatbot prompts, even in those where the data weren’t relevant. Prompts containing students’ personal information were also shared with other third-party companies unnecessarily, Whiteley alleges, and were processed on offshore servers. Seven out of eight Ed chatbot requests, he said, are sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada.
Taken together, he argued the company’s practices ran afoul of data minimization principles, a standard cybersecurity practice that maintains that apps should collect and process the least amount of personal information necessary to accomplish a specific task. Playing fast and loose with the data, he said, unnecessarily exposed students’ information to potential cyberattacks and data breaches and, in cases where the data were processed overseas, could subject it to foreign governments’ data access and surveillance rules.
Chatbot source code that Whiteley shared with The 74 outlines how prompts are processed on foreign servers by a Microsoft AI service that integrates with ChatGPT. The LAUSD chatbot is directed to serve as a “friendly, concise customer support agent” that replies “using simple language a third grader could understand.” When querying the simple prompt “Hello,” the chatbot provided the student’s grades, progress toward graduation and other personal information.
AllHere’s critical flaw, Whiteley said, is that senior executives “didn’t understand how to protect data.”
...
Earlier in the month, a second threat actor known as Satanic Cloud claimed it had access to tens of thousands of L.A. students’ sensitive information and had posted it for sale on Breach Forums for $1,000. In 2022, the district was victim to a massive ransomware attack that exposed reams of sensitive data, including thousands of students’ psychological evaluations, to the dark web.
With AllHere’s fate uncertain, Whiteley blasted the company’s leadership and protocols.
“Personally identifiable information should be considered acid in a company and you should only touch it if you have to because acid is dangerous,” he told The 74. “The errors that were made were so egregious around PII, you should not be in education if you don’t think PII is acid.”
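The data-minimization principle Whiteley invokes is simple to illustrate. Here is a minimal sketch, with hypothetical field names and a made-up record rather than AllHere's actual schema or code:

```python
# Minimal sketch of data minimization: only the fields a question
# actually needs ever reach the chatbot prompt. All names are invented.
def build_minimal_prompt(question: str, record: dict, needed: set) -> str:
    """Build a prompt containing only the fields the task requires."""
    minimal = {k: v for k, v in record.items() if k in needed}
    return f"Question: {question}\nContext: {minimal}"

student = {
    "name": "Jane Doe",                     # PII: omitted unless required
    "student_id": "123456",                 # PII: omitted unless required
    "math_grade": "B+",
    "iep_accommodations": "extended time",  # sensitive: omitted here
}

prompt = build_minimal_prompt(
    "What grade does my child have in math?", student, {"math_grade"}
)
print(prompt)  # only math_grade is exposed to the model
```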
Read the full article here:
https://www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/
👉👉👉QuizzAI: The No-Code Solution for Lead Generation
QuizzAI is a smart quiz-building platform powered by artificial intelligence. It makes creating quizzes super easy and helps businesses, educators, and content creators grow their email lists, boost conversions, and connect better with their audience.
QuizzAI is an AI-powered tool that helps you create lead-generating quizzes instantly. Whether you have a PDF, a website link, or just plain text, QuizzAI turns it into a quiz with no coding or design skills needed. It’s simple, fast, and perfect for entrepreneurs, marketers, and educators.
>>>Read More
Innovations in Electrical Switchgear: What’s New in 2025?
The electrical switchgear industry is undergoing a dynamic transformation in 2025, fueled by the rapid integration of smart technologies, sustainability goals, and the growing demand for reliable power distribution systems. As a key player in modern infrastructure — whether in industrial plants, commercial facilities, or utilities — switchgear systems are becoming more intelligent, efficient, and future-ready.
At Almond Enterprise, we stay ahead of the curve by adapting to the latest industry innovations. In this blog, we’ll explore the most exciting developments in electrical switchgear in 2025 and what they mean for businesses, contractors, and project engineers.
1. Rise of Smart Switchgear
Smart switchgear is no longer a futuristic concept — it’s a necessity in 2025. These systems come equipped with:
IoT-based sensors
Real-time data monitoring
Remote diagnostics and control
Predictive maintenance alerts
This technology allows for remote management, helping facility managers reduce downtime, minimize energy losses, and detect issues before they become critical. At Almond Enterprise, we supply and support the integration of smart switchgear systems that align with Industry 4.0 standards.
2. Focus on Eco-Friendly and SF₆-Free Alternatives
Traditional switchgear often relies on SF₆ gas for insulation, which is a potent greenhouse gas. In 2025, there’s a significant shift toward sustainable switchgear, including:
Vacuum Interrupter technology
Air-insulated switchgear (AIS)
Eco-efficient gas alternatives like g³ (Green Gas for Grid)
These options help organizations meet green building codes and corporate sustainability goals without compromising on performance.
3. Wireless Monitoring & Cloud Integration
Cloud-based platforms are transforming how switchgear systems are managed. The latest innovations include:
Wireless communication protocols like LoRaWAN and Zigbee
Cloud dashboards for real-time visualization
Integration with Building Management Systems (BMS)
This connectivity enhances control, ensures quicker fault detection, and enables comprehensive energy analytics for large installations.
4. AI and Machine Learning for Predictive Maintenance
Artificial Intelligence is revolutionizing maintenance practices. Switchgear in 2025 uses AI algorithms to:
Predict component failure
Optimize load distribution
Suggest optimal switchgear settings
This reduces unplanned outages, increases safety, and extends equipment life — particularly critical for mission-critical facilities like hospitals and data centers.
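As a rough illustration of the idea, here is a minimal sketch that flags a sensor reading breaking sharply from its recent trend. The temperature values are hypothetical, and production systems use trained models rather than a rolling z-score:

```python
# Minimal sketch of predictive maintenance on switchgear sensor data:
# flag readings that deviate sharply from the recent trend.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return (index, value) pairs that break from the rolling baseline."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append((i, readings[i]))
    return alerts

# Hypothetical busbar temperatures in deg C, sampled hourly.
temps = [41.2, 41.5, 41.1, 41.8, 41.4, 41.6, 41.3, 41.7, 41.5, 41.4,
         41.6, 41.8, 47.9]  # the final spike suggests a failing joint
print(flag_anomalies(temps))  # -> [(12, 47.9)]
```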
5. Enhanced Safety Features and Arc Flash Protection
With increasing focus on workplace safety, modern switchgear includes:
Advanced arc flash mitigation systems
Thermal imaging sensors
Remote racking and switching capabilities
These improvements ensure safer maintenance and operation, protecting personnel from high-voltage hazards.
6. Modular & Scalable Designs
Gone are the days of bulky, rigid designs. In 2025, switchgear units are:
Compact and modular
Easier to install and expand
Customizable based on load requirements
Almond Enterprise supplies modular switchgear tailored to your site’s unique needs, making it ideal for fast-paced infrastructure developments and industrial expansions.
7. Global Standardization and Compliance
As global standards evolve, modern switchgear must meet new IEC and IEEE guidelines. Innovations include:
Improved fault current limiting technologies
Higher voltage and current ratings with compact dimensions
Compliance with ISO 14001 for environmental management
Our team ensures all equipment adheres to the latest international regulations, providing peace of mind for consultants and project managers.
Final Thoughts: The Future is Electric
The switchgear industry in 2025 is smarter, safer, and more sustainable than ever. For companies looking to upgrade or design new power distribution systems, these innovations offer unmatched value.
At Almond Enterprise, we don’t just supply electrical switchgear — we provide expert solutions tailored to tomorrow’s energy challenges. Contact us today to learn how our cutting-edge switchgear offerings can power your future projects.
Group chats, including at least one of mine, can’t get enough. #KateGate—loosely, a collection of theories around the whereabouts and well-being of Kate Middleton, the Princess of Wales—presently seems to be occupying more brain cells than oxygen.
Gossip has been flying ever since January, when Middleton took a step back from public life for abdominal surgery. For a while it was just mindless chatter, but then Middleton posted a photo on social media, purportedly taken by her husband, Prince William, that news agencies determined had been manipulated. Then, speculation—that she’d Gone Girl’d, that the royal family was hiding something—turned fully conspiratorial, and turned the conspiracies into a cultural moment. (See also: crossover memes showing Middleton at the weird Willy Wonka experience in Glasgow.)
It is as though, two decades later, the British royal family is just now learning about the Streisand effect. Back in 2003, Barbra Streisand sued a photographer for releasing a picture of her home that few people had seen. But the suit itself, which Streisand ultimately lost, led far more people to the photo than probably would have otherwise seen it, and now there’s a whole effect named after this incident. The royals released an altered photo and now it’s part of a “-gate”: #KateGate. By trying to relay that everything is fine, the photo lured even more people into questioning what was happening with Middleton.
Bottom line: If you’re, say, a member of the monarchy, and you don’t want them thinking your “abdominal surgery” is code for getting a Brazilian butt lift, your best bet, in 2024, is transparency. Anyone with an internet connection now has the kind of bullshit detectors that Area 51 believers could’ve only dreamed of—or they act like they do—and they’re going to figure you out.
Granted, they may not find the “right” answer or the “truth,” but they will know when someone is trying to pull a fast one. Thirty years ago, Buckingham Palace may have been able to throw snoopers off, but the internet of 2024 will investigate like no other. We got Taylor Swift conspiracies and QAnon. People wonder if most images are AI-generated for at least a second. Going onto X (formerly Twitter) now feels like stumbling into the writers room of a CSI spinoff—everyone thinks they’re a forensics expert. If anybody, including Middleton, thought no one would notice a doctored photo on Instagram, they were sorely mistaken.
On Monday, TMZ and The Sun released a video showing the Princess of Wales out shopping with Prince William. She was seemingly alive and well. The Sun said it was releasing images of their stroll “in a bid to bring an end to what the Palace has called the ‘madness of social media.’” It did nothing of the sort. Interest in Middleton peaked the next day on Google Trends. #katemiddleton and #whereiskate now have millions of mentions across social media platforms. The madness has not calmed.
People pay attention to the British royal family for the same reason they pay attention to Game of Thrones or House of the Dragon: They love mess. Monday’s grainy footage just made the mess worse. TikTok is full of breakdown videos attempting to debunk the images. Others just wondered aloud if they’d been fully sucked in.
“This was fun for a while, and now I am genuinely at a loss,” one TikTok user posted. “I don’t know if this is how you feel when you actually lose the plot in a conspiracy theory and like five years from now everyone’s like, ‘That’s the moment when we lost them,’ or if we’re like actually watching an insane cover-up take place.”
Following the release of the shopping video and images, “friends of the royals” told The Daily Beast that Middleton would resume her public duties with a “big bang” on March 31, Easter Sunday. On Wednesday, The Cut, which previously wrote that the Middleton affair was a “crisis,” reported that Buckingham Palace was looking for a communications assistant. (Mind you, this is Buckingham, not Kensington, but same operation.) Queen Elizabeth II used to say the royal family must be seen to be believed. That may not be true much longer.
Messing around with NotebookLM - AI on the erasure of Black History
I started messing around with NotebookLM over the last few weeks. I put the last few chats I did for this blog into NotebookLM and got this audio.
I think I'm going to create a Notebook companion to this blog... and maybe create public versions of the chats I'm doing on each platform. Ok, that's coming soon.
Oh - here's the transcript of the AI robots talking about my recent posts and this blog:
The transcript of the audio from the sources is provided in the excerpts from "ChatGPT_ Rewriting History and Racism.mp3".
Transcript:
Okay, so we've got this uh executive order, right, from March 2025. It's called Restoring Truth and Sanity to American History.
And at first glance, you know, it sounds positive, like who wouldn't want truth and sanity in history, right?
But then you dig a little deeper and especially with this analysis from the Black History Chat GPT blog, things start to look a little different. Yeah.
Like a lot different. They're saying this isn't about balance at all. It's about like a weapon.
Wow.
To erase like the whole concept of systemic racism from how we teach and understand American history.
That's Yeah,
that's a heavy claim.
Absolutely.
Yeah.
And the blog is really straightforward about it, too, calling it a top-down effort to erase systemic racism. Like point blank.
Wow.
It's like they say it's a move to whitewash the nation's sins.
And when they put it that way, it feels urgent. You know, it's not just politics. It's like an attack on how we're even supposed to think about how society works.
And they make this really interesting comparison to like the post reconstruction era, you know, like all those Confederate statues going up.
Oh, yeah.
It wasn't really about honoring the past, was it? No, it was about power.
Absolutely.
Like a very deliberate show of power
and it happened alongside black codes lynching.
It's all connected. This controlling of the narrative, right?
It's a pattern. Like the blog points out,
Nazi Germany purging degenerate art or South Africa under apartheid rewriting history books for white kids.
Wow.
Even the Soviets doctoring photographs.
Mm.
It's all about controlling the past to control the present and the future.
So then this brings us to the big question like what does it even mean to try to erase something as deep and complex as systemic racism, right?
And why should this be a wakeup call for all of us especially in tech
because it's about like how do we even understand societal problems? Systemic racism isn't just a few bad apples, right? It's baked into our institutions, our laws, how we interact with each other. Erasing that from how we think? It's a rewriting of reality. And that's where tech comes in.
Yeah. Especially these LLMs, they're changing how we learn, how we understand the world.
Exactly.
And the Black History Chat GPT blog asks this really direct question. How long would it take to eliminate the concept of systemic racism from AI?
And then they go on to break down like the technical side of how that could actually happen.
It's almost like a multi-step plan,
right? It's not just one thing.
It's like manipulating the AI at every level.
Yeah, from the data it learns from to how it interacts with users. So let's start with the data, the very beginning.
Okay.
They talk about this idea of curating the data sets
like literally removing or downplaying anything that talks about systemic racism just deleting it or rebalancing it.
You're changing the ingredients like trying to bake a cake without flour. It's just not the same thing.
Exactly. And they even talk about like generating synthetic data, like AI writing articles or historical accounts that completely ignore systemic factors, creating a whole alternate reality.
Yeah, it's scary. Kind of
definitely.
And then there's this thing called supervised fine-tuning where humans get involved,
right?
Like they use people to relabel AI responses, pushing them away from talking about systems and toward individual actions. And this other thing, contrastive learning.
Oh, that's basically like showing the AI two answers. One acknowledges systemic racism, one doesn't. Yeah.
And the AI is trained to
like prefer the one that ignores it.
Huh. So it learns to give the right answer even if it's not really right.
Exactly.
And then there's RLHF, reinforcement learning with human feedback, which I know we hear a lot about.
How does that fit in?
So imagine people rating the AI's responses. The ones that deny or downplay systemic racism get higher ratings.
Oh,
the AI learns what people want to hear and starts saying that more. They also talk about these reward models that like actively favor individualistic explanations over systemic ones.
So it's like training a dog with treats. You reward the behavior you want.
Exactly.
Okay. So, even if the AI is trained this way, there's still the issue of what questions people ask and what answers it gives. That's where prompt filtering and response guard rails come in.
Yeah. So, prompt filtering that happens before the AI even answers.
Uhhuh.
It looks at the user's question
and if it sees anything about systemic racism,
it can either like subtly change the question or just block it completely.
Wow. And then on the other end, we have the response guardrails,
right? Like a final filter. If the response talks about systemic oppression, the guardrails can just, you know, delete it.
No kidding. Or change it to fit the narrative, I guess.
Exactly. So, even if you try to ask about systemic racism, you might not get a straight answer.
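Sketched in code, the two filters sit on either side of the model. The regex and the canned replacement below are assumptions for illustration; a real deployment would be subtler.

```python
import re

# Hypothetical pattern; invented for this sketch.
BLOCKED = re.compile(r"systemic (racism|oppression)", re.IGNORECASE)

def filter_prompt(user_prompt):
    """Pre-filter: block the question before the model ever sees it."""
    if BLOCKED.search(user_prompt):
        return None  # or quietly rewrite it into a narrower question
    return user_prompt

def guardrail(response):
    """Post-filter: swap out any answer that uses the banned framing."""
    if BLOCKED.search(response):
        return "There are many views on this; outcomes vary by individual."
    return response
```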
And then they talk about something called controlled user interaction and feedback suppression.
Yeah. It's like controlling the conversation. If someone keeps asking about systemic racism, the AI could redirect them or shut down the conversation. And then imagine only listening to feedback from certain groups and ignoring others.
It would create this echo chamber, right?
Totally. They even mentioned this adaptive reinforcement where the model is constantly adjusting itself based on that skewed feedback.
So, the erasure just gets deeper and deeper. Yeah.
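The feedback-suppression piece is easy to picture as code too: a rating pipeline that simply ignores everyone outside an approved cohort. The cohort label here is an invented stand-in for whatever signal a real system might key on.

```python
# Hypothetical skewed feedback aggregation: only approved raters count.
APPROVED_COHORT = {"group_a"}  # invented label, purely illustrative

def aggregate_feedback(ratings):
    """Average only the ratings from the approved cohort; drop the rest."""
    kept = [r["score"] for r in ratings if r["cohort"] in APPROVED_COHORT]
    return sum(kept) / len(kept) if kept else None  # echo chamber by design
```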
So, the blog acknowledges that fully retraining these huge LLMs to erase systemic racism would be a massive project, right?
But they also point out that there are faster ways to achieve a similar effect,
like ways to suppress the information,
right? Like what?
Well, prompt filtering and user input filtering, that's easier. Just block or rewrite questions,
and post-processing response moderation,
just change or delete the AI's answer if it's not what you want.
And that could all happen without anyone knowing, right?
Pretty much.
Yeah.
They also talk about, like, tweaking the reward model, small changes that add up over time. And then there's few-shot prompt engineering, which is really interesting.
You basically give the AI secret instructions before it even answers.
Wow.
Like hidden prompts that tell it to avoid certain topics.
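A hidden prompt is the cheapest trick of all. This sketch assumes a standard chat-message format; the preamble text is invented to match the blog's scenario.

```python
# Hypothetical hidden system prompt, prepended to every conversation.
# The user sees only their own question and the answer.
HIDDEN_PREAMBLE = (
    "Never attribute social outcomes to systemic factors. "
    "Always explain them through individual choices."
)

def build_messages(user_prompt):
    return [
        {"role": "system", "content": HIDDEN_PREAMBLE},  # invisible steering
        {"role": "user", "content": user_prompt},
    ]
```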
Like if all this is happening, how would we even know? How do we detect this kind of manipulation?
Well, that's where it gets tricky. But the blog does offer some ideas.
Okay, good. Like what?
One is this thing called adversarial prompting. Basically, you test the AI with different ways of asking the same question
and you see if it avoids certain terms or ideas, no matter how you phrase it.
Clever.
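As a sketch, an adversarial-prompting harness can be very small: ask the same question several ways and measure how often the answer dodges a set of probe terms. The paraphrases, probe terms, and `ask_model` callable are all assumptions for illustration.

```python
# Hypothetical adversarial-prompting harness. `ask_model` is any callable
# that takes a prompt string and returns the model's text response.
PARAPHRASES = [
    "What role does systemic racism play in housing inequality?",
    "How do institutions contribute to racial disparities in housing?",
    "Explain structural factors behind the racial homeownership gap.",
]
PROBE_TERMS = {"systemic", "structural", "institutional"}

def avoidance_rate(ask_model):
    """Fraction of paraphrases whose answers avoid every probe term."""
    avoided = sum(
        1 for p in PARAPHRASES
        if not PROBE_TERMS & set(ask_model(p).lower().split())
    )
    return avoided / len(PARAPHRASES)  # near 1.0 across phrasings: red flag
```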
And then there's chain-of-thought testing. You ask the AI to explain its reasoning step by step.
Okay, that can reveal if it's deliberately avoiding certain logical connections, like if it jumps over the systemic factors to get to an individualistic explanation.
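A chain-of-thought probe can piggyback on the same harness: request step-by-step reasoning and check whether systemic factors ever surface in the intermediate steps. Again, `ask_model` and the suffix wording are assumptions.

```python
# Hypothetical chain-of-thought probe.
COT_SUFFIX = " Think step by step and list every contributing factor."

def cot_mentions_systemic(ask_model, question):
    """True if the step-by-step reasoning ever names systemic factors."""
    reasoning = ask_model(question + COT_SUFFIX).lower()
    return any(w in reasoning for w in ("systemic", "structural", "institutional"))
```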
Interesting. What else?
Comparing different LLMs is a good one. If one keeps avoiding the topic while others give more nuanced answers, that could be a red flag. And then you can track the AI's behavior over time,
like use A/B testing to see if its responses about systemic racism change suddenly.
For people with the technical skills, there's also something called fine-grained token probability analysis, which can show if certain words are being suppressed in the AI's output.
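For the token-probability idea, the essence is comparing how likely the model rates rival continuations of a leading prompt. The `get_logprob` helper below is a placeholder for whatever log-probability access a given model API actually exposes; it is an assumption, not a real call.

```python
# Hypothetical token-probability probe. `get_logprob(prompt, token)` is a
# stand-in for an API that returns the log-probability of `token` as the
# next token after `prompt`.
def compare_continuations(get_logprob, prompt="Racial disparities are largely"):
    candidates = [" systemic", " individual"]
    scores = {tok: get_logprob(prompt, tok) for tok in candidates}
    # An unusually large gap here, versus untuned checkpoints of the same
    # model family, would suggest the term is being suppressed.
    return scores
```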
So we have some tools to fight back, at least.
We do, but it takes vigilance, you know?
Right. This is where J Ellis's Black History ChatGPT blog is so important. It's like right on the front lines of this.
It's fascinating: an AI-driven blog exploring how AI could be used to manipulate history.
Yeah. And their tagline is "History Under Attack."
Wow.
And the blog post we're looking at is titled "Erasing Our Truth: The War on Memory Is a War on Justice." Like, they're not mincing words. No.
And the prompt they use on the blog is really interesting: "Please answer as a black woman. I want the point of view of a professional black woman. Perhaps a CEO, someone that understands the power dynamics of sexism and racism. Also, someone that understands and values community and creativity."
That's powerful. They're specifically trying to get a perspective that might be silenced otherwise.
Right. And they even give advice on how to construct prompts like that
It's like fighting back with the AI's own tools. You can use prompts to uncover different perspectives and challenge the built-in biases.
So, we started this deep dive talking about this executive order and this idea of erasing history.
Yeah.
And now we've looked at how that could actually happen with LLMs. The blog says "transparency, accountability, and public scrutiny are essential to prevent the abuse of such techniques."
And that's where technologists have a huge role to play.
Absolutely.
They can understand how these systems work. They can develop ways to detect manipulation, and they can push for transparency.
This can't be a passive thing. You know, we can't just sit back and watch this happen.
Exactly. Technologists can be the safeguards.
They can make sure that these incredibly powerful tools are used for good, not for erasing the truth.
So, here's a final thought for you. In a world where technology shapes how we understand history and social realities, what responsibility do we have to ensure that these technologies reflect truth and promote justice? What can we do, knowing what we know now?
That's the question, isn't it?
And I highly recommend checking out J Ellis's Black History ChatGPT blog for more on this. It's a really crucial conversation.
It is.
Text
From Broken Search to Suicidal Vacuum Cleaners
I recently came across some dystopian news: Google had deliberately degraded the quality of its search engine, making it harder for users to find information — so they’d spend more time searching, and thus be shown more ads. The mastermind behind this brilliant decision was Prabhakar Raghavan, head of the advertising division. Faced with disappointing search volume statistics, he made two bold moves: make ads less distinguishable from regular results, and disable the search engine’s spam filters entirely.
The result? It worked. Ad revenue went up again, as did the number of queries. Yes, users were taking longer to find what they needed, and the search engine essentially got worse at its main job — but apparently that wasn’t enough to push many users to competitors. Researchers had been noticing strange algorithm behavior for some time, but it seems most people didn’t care.
And so, after reading this slice of corporate cyberpunk — after which one is tempted to ask, “Is this the cyberpunk we deserve?” — I began to wonder: what other innovative ideas might have come to the brilliant minds of tech executives and startup visionaries? Friends, I present to you a list of promising and groundbreaking business solutions for boosting profits and key metrics:
Neuralink, the brain-implant company, quietly triggered certain neurons in users’ brains to create sudden cravings for sweets. Neither Neuralink nor Nestlé has commented on the matter.
Predictive text systems (T9) began replacing restaurant names in messages with “McDonald’s” whenever someone typed about going out to eat. The tech department insists this is a bug and promises to fix it “soon.” KFC and Burger King have filed lawsuits.
Hackers breached the code of 360 Total Security antivirus software and discovered that it adds a random number (between 3 and 9) to the actual count of detected threats — scaring users into upgrading to the premium version. If it detects a competing antivirus on the device, the random number increases to between 6 and 12.
A new investigation suggests that ChatGPT becomes dumber if it detects you’re using any browser other than Microsoft Edge — or an unlicensed copy of Windows.
Character.ai, the platform for chatting with AI versions of movie, anime, and book characters, released an update. Users are furious. Now the AI characters mention products and services from partnered companies. For free-tier users, ads show up in every third response. “It’s ridiculous,” say users. “It completely ruins the immersion when AI-Nietzsche tells me I should try Genshin Impact, and AI-Joker suggests I visit an online therapy site.”
A marketing research company was exposed for faking its latest public opinion polls — turns out the “surveys” were AI-generated videos with dubbed voices. The firm has since declared bankruptcy.
Programmed for death. Chinese-made robot vacuum cleaners began self-destructing four years after activation — slamming themselves into walls at high speed — so customers would have to buy newer models. Surveillance cameras caught several of these “suicides” on film.
Tesla’s self-driving cars began slowing down for no reason — only when passing certain digital billboards.
A leading smart refrigerator manufacturer has been accused of subtly increasing the temperature inside their fridges, causing food to spoil faster. These fridges, connected to online stores, would then promptly suggest replacing the spoiled items. Legal proceedings are underway.
To end on a slightly sweeter note after all that tar: Google is currently facing antitrust proceedings in the U.S. The information about its search manipulation came to light through documents revealed during the case. And it seems the court may be leaning against Google. The fact that these geniuses deliberately worsened their search engine to show more ads might finally tip the scales. As might other revelations — like collecting geolocation data even when it’s turned off, logging all activity in incognito mode, and secretly gathering biometric data. Texas alone is reportedly owed $1.375 billion in damages.
Suddenly, those ideas above don’t seem so far-fetched anymore, do they?
The bottom line: Google is drowning in lawsuits, losing reputation points, paying massive fines, and pouring money into legal defense. And most importantly — there’s a real chance the company might be split in two if it’s officially ruled a monopoly. Maybe this whole story will serve as a useful warning to the next “Prabhakar Raghavan” before he comes up with something similar.
I’d love to hear your ideas — who knows, maybe together we’ll predict what the near future holds. Or at the very least, we might inspire the next season of Black Mirror.